2,011 research outputs found

    Kernel Tricks, Means and Ends

    Get PDF

    Learning with Kernels

    Get PDF

    Computing Functions of Random Variables via Reproducing Kernel Hilbert Space Representations

    Full text link
    We describe a method to perform functional operations on probability distributions of random variables. The method uses reproducing kernel Hilbert space representations of probability distributions, and it is applicable to all operations which can be applied to points drawn from the respective distributions. We refer to our approach as kernel probabilistic programming. We illustrate it on synthetic data and show how it can be used for nonparametric structural equation models, with an application to causal inference.
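
    As an illustration of the idea (not the authors' code), the minimal NumPy sketch below represents the empirical distribution of X by the expansion points and weights of its kernel mean embedding, and pushes it through a function f by applying f to the expansion points while keeping the weights. The Gaussian RBF kernel, the gamma value, and the helper rbf_kernel are assumptions made only for this example.

```python
# Minimal sketch: RKHS mean-embedding representation of a distribution and
# a functional operation applied to it by transforming the expansion points.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian RBF kernel between point sets of shape (n, d) and (m, d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))          # samples from P(X)
w = np.full(len(x), 1.0 / len(x))      # uniform weights -> empirical mean embedding

f = lambda t: t ** 2                   # the functional operation to apply
fx = f(x)                              # expansion points of the embedding of f(X)

# Evaluate the embedding of f(X) at query points z:
#   mu_{f(X)}(z) = sum_i w_i k(f(x_i), z)
z = np.linspace(0.0, 4.0, 5).reshape(-1, 1)
mu_fx_at_z = rbf_kernel(fx, z).T @ w
print(mu_fx_at_z)
```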

    Transductive Inference with Graphs

    Get PDF
    We propose a general regularization framework for transductive inference. The given data are thought of as a graph, where the edges encode the pairwise relationships among data. We develop discrete analysis and geometry on graphs, and then naturally adapt the classical regularization of the continuous case to the graph setting. A new and effective algorithm is derived from this general framework, which also recovers an approach we proposed earlier.
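
    A minimal sketch of the kind of algorithm such a graph-regularization framework yields, assuming a label-propagation scheme built on the symmetrically normalized affinity matrix; the paper's own algorithm may differ in normalization and weighting, and the value of alpha and the toy chain graph are illustrative choices.

```python
# Minimal sketch: transductive label inference via normalized-graph smoothing.
import numpy as np

def transductive_labels(W, Y, alpha=0.9):
    """W: (n, n) symmetric affinity matrix; Y: (n, c) one-hot labels with
    all-zero rows for unlabeled points; returns soft label scores F."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt                       # normalized affinities
    F = np.linalg.solve(np.eye(len(W)) - alpha * S, Y)    # closed-form fixed point
    return F

# Toy example: four points on a chain, endpoints labeled with classes 0 and 1.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.zeros((4, 2)); Y[0, 0] = 1; Y[3, 1] = 1
print(transductive_labels(W, Y).argmax(axis=1))   # expected: [0 0 1 1]
```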

    A Unifying View of Multiple Kernel Learning

    Full text link
    Recent research on multiple kernel learning has led to a number of approaches for combining kernels in regularized risk minimization. The proposed approaches include different formulations of objectives and varying regularization strategies. In this paper we present a unifying general optimization criterion for multiple kernel learning and show how existing formulations are subsumed as special cases. We also derive the criterion's dual representation, which is suitable for general smooth optimization algorithms. Finally, we evaluate multiple kernel learning in this framework analytically, using a Rademacher complexity bound on the generalization error, and empirically, in a set of experiments.
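
    The common ingredient behind these formulations is a nonnegative combination of base kernels inside a regularized risk minimizer. The sketch below fixes uniform weights theta and plugs the combined Gram matrix into an SVM through scikit-learn's precomputed-kernel interface; an actual MKL method would optimize theta under a norm constraint rather than fixing it. The data, base kernels, and C value are illustrative choices, not taken from the paper.

```python
# Minimal sketch: a fixed nonnegative combination of base kernels used inside
# a regularized risk minimizer (SVM with a precomputed Gram matrix).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)

base_kernels = [linear_kernel(X, X),
                rbf_kernel(X, X, gamma=0.5),
                rbf_kernel(X, X, gamma=2.0)]
theta = np.full(len(base_kernels), 1.0 / len(base_kernels))  # fixed here; MKL learns it

K = sum(t * Km for t, Km in zip(theta, base_kernels))        # combined kernel
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("training accuracy:", clf.score(K, y))
```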

    On group properties and reality conditions of UOSp(1|2) gauge transformations

    Full text link
    For the graded Lie algebra osp(1|2;C), whose proper Lie subalgebra is su(2), we consider the Baker-Campbell-Hausdorff formula and formulate a reality condition for the Grassmann-odd transformation parameters that multiply the pair of odd generators of the graded Lie algebra. The use of su(2)-spinors clarifies the nature of the Grassmann-odd transformation parameters and allows us to investigate the corresponding infinitesimal gauge transformations. We also explore the action of the corresponding group element of UOSp(1|2) on an appropriately graded representation space and find that the graded generalization of Hermitian conjugation is compatible with the Dirac adjoint. Consistency of the generalized (graded) unitarity condition with the proposed reality condition is shown.
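
    For reference, the first terms of the Baker-Campbell-Hausdorff expansion the abstract refers to are given below. When the odd generators are multiplied by Grassmann-odd parameters, the exponents are even elements, so the ordinary series applies, with the brackets reducing to (anti)commutators of the osp(1|2) generators.

```latex
% First terms of the Baker-Campbell-Hausdorff expansion.
\begin{equation*}
e^{X} e^{Y} = e^{Z}, \qquad
Z = X + Y + \tfrac{1}{2}[X,Y]
  + \tfrac{1}{12}\bigl([X,[X,Y]] - [Y,[X,Y]]\bigr) + \cdots
\end{equation*}
```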

    About the Triangle Inequality in Perceptual Spaces

    No full text
    Perceptual similarity is often formalized as a metric in a multi-dimensional space. Stimuli are points in the space, and similar stimuli lie close to each other, while a large distance separates stimuli that are very different from each other. This conception of similarity prevails in studies from color perception and face perception to studies of categorization. While this notion of similarity is intuitively plausible, there has been an intense debate in cognitive psychology about whether perceived dissimilarity satisfies the metric axioms. In a seminal series of papers, Tversky and colleagues challenged all of the metric axioms [1,2,3]. The triangle inequality has been the hardest of the metric axioms to test experimentally. The reason is that measurements of perceived dissimilarity are usually only on an ordinal scale, or at most an interval scale. Hence, the triangle inequality on a finite set of points can always be satisfied, trivially, by adding a big enough constant to the measurements. Tversky and Gati [3] found a way to test the triangle inequality in conjunction with a second, very common assumption. This assumption is segmental additivity [1]: the distance from A to C equals the distance from A to B plus the distance from B to C, if B is “on the way”. All of the metrics that had been suggested to model similarity share this assumption of segmental additivity, be it the Euclidean metric, the Lp-metric, or any Riemannian geometry. Tversky and Gati collected a substantial amount of data using many different stimulus sets, ranging from perceptual to cognitive, and found strong evidence that many human similarity judgments cannot be accounted for by the usual models of similarity. This led them to the conclusion that either the triangle inequality has to be given up or one has to use metric models with subadditive metrics; they favored the first solution. Here, we present a principled subadditive metric based on Shepard’s universal law of generalization [4]. Instead of representing each stimulus as a point in a multi-dimensional space, our subadditive metric stems from representing each stimulus by its similarity to all other stimuli in the space. This similarity function, as for example given by Shepard’s law, will usually be a radial basis function and also a positive definite kernel. Hence, there is a natural inner product defined by the kernel and a metric induced by this inner product. This metric is subadditive. In addition, it has the psychologically desirable property that the distance between stimuli is bounded.
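
    A small numerical illustration (not the paper's code) of the claimed properties, assuming a Shepard-style exponential similarity k(x, y) = exp(-|x - y|): the kernel-induced distance is bounded and strictly subadditive even for a point B lying between A and C, so segmental additivity fails.

```python
# Minimal sketch: the metric induced by an exponential similarity kernel is
# bounded and subadditive; collinear points violate segmental additivity.
import numpy as np

def k(x, y):
    return np.exp(-np.abs(x - y))       # Shepard-style similarity

def d(x, y):
    # Distance induced by the kernel inner product in the RKHS.
    return np.sqrt(k(x, x) + k(y, y) - 2.0 * k(x, y))

A, B, C = 0.0, 1.0, 2.0                 # B lies "on the way" from A to C
print(d(A, C))                          # bounded above by sqrt(2)
print(d(A, B) + d(B, C))                # strictly larger: subadditivity
```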

    A Primer on Kernel Methods

    Get PDF